
    Two mathematical tools to analyze metastable stochastic processes

    We show how entropy estimates and logarithmic Sobolev inequalities on the one hand, and the notion of quasi-stationary distribution on the other hand, are useful tools to analyze metastable overdamped Langevin dynamics, in particular to quantify the degree of metastability. We discuss how these approaches can be used to estimate the efficiency of some classical algorithms employed to speed up sampling, and to evaluate the error introduced by some coarse-graining procedures. This paper is a summary of a plenary talk given by the author at the ENUMATH 2011 conference.
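
    As a generic illustration of the kind of dynamics analyzed here (not taken from the paper), the sketch below simulates overdamped Langevin dynamics in a one-dimensional double-well potential with an Euler-Maruyama scheme and counts the rare hops between wells; the potential, temperature and hop criterion are all illustrative choices.

```python
import numpy as np

# Overdamped Langevin dynamics dX_t = -V'(X_t) dt + sqrt(2/beta) dW_t,
# discretized by Euler-Maruyama, in a double-well potential.  Metastability
# shows up as rare hops between the two wells.

def grad_V(x):
    # V(x) = (x^2 - 1)^2, double well with minima at x = -1 and x = +1
    return 4.0 * x * (x**2 - 1.0)

def count_hops(beta=5.0, dt=1e-3, n_steps=200_000, x0=-1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, hops, well = x0, 0, np.sign(x0)
    for _ in range(n_steps):
        x += -grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        s = np.sign(x)
        if s != 0 and s != well:       # crude hop criterion: a sign change of x
            hops, well = hops + 1, s
    return hops

print("well-to-well crossings:", count_hops())   # decreases sharply as beta grows
```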

    Accelerated dynamics: Mathematical foundations and algorithmic improvements

    We review recent work on the mathematical analysis of algorithms proposed by A.F. Voter and co-workers in the late nineties to efficiently generate long trajectories of metastable processes. These techniques have been successfully applied in many contexts, in particular in the field of materials science. The mathematical analysis we propose relies on the notion of quasi-stationary distribution.
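
    To make the central object concrete, here is a small, generic illustration (not taken from the review): the quasi-stationary distribution of a Markov chain in a metastable region is the normalized left principal eigenvector of the transition kernel restricted to that region, which can be computed by power iteration on the killed kernel. The chain, region and sizes below are arbitrary choices.

```python
import numpy as np

# Symmetric random walk on {0, ..., 9}, killed as soon as it leaves that set.
# The quasi-stationary distribution (QSD) is the left Perron eigenvector of
# the killed (substochastic) transition matrix, normalized to sum to 1.

n = 10
P = np.zeros((n, n))
for i in range(n):
    if i - 1 >= 0:
        P[i, i - 1] = 0.5
    if i + 1 < n:
        P[i, i + 1] = 0.5
# boundary rows sum to less than 1: the missing mass is the killing probability

mu = np.ones(n) / n              # initial guess
for _ in range(5000):            # power iteration on the killed kernel
    mu = mu @ P
    mu /= mu.sum()               # renormalization = conditioning on survival

print("QSD estimate:", np.round(mu, 4))
# For this chain the QSD is proportional to sin(pi*(i+1)/(n+1)), i = 0, ..., n-1.
```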

    Long-time convergence of an adaptive biasing force method: Variance reduction by Helmholtz projection

    In this paper, we propose an improvement of the adaptive biasing force (ABF) method, obtained by projecting the estimated mean force onto a gradient. The associated stochastic process satisfies a nonlinear stochastic differential equation. Using entropy techniques, we prove exponential convergence to the stationary state of this stochastic process. We finally show, on some numerical examples, that the variance of the approximated mean force is reduced by this technique, which makes the algorithm more efficient than the standard ABF method.
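
    For context, the sketch below implements the standard ABF idea on a toy two-dimensional potential with reaction coordinate xi(x, y) = x (it does not include the Helmholtz projection proposed in the paper); the potential, bin layout and parameters are illustrative assumptions.

```python
import numpy as np

# Standard ABF on a 2D toy potential with reaction coordinate xi = x: the
# running bin average of the instantaneous force dV/dx is subtracted from the
# dynamics, progressively flattening the free-energy barrier along xi.

def grad_V(x, y):
    # V(x, y) = (x^2 - 1)^2 + 2*y^2 : double well in x, harmonic in y
    return 4.0 * x * (x**2 - 1.0), 4.0 * y

beta, dt, n_steps = 4.0, 1e-3, 200_000
edges = np.linspace(-1.5, 1.5, 31)         # 30 bins along the reaction coordinate
force_sum = np.zeros(len(edges) - 1)       # accumulated instantaneous force per bin
counts = np.zeros(len(edges) - 1)

rng = np.random.default_rng(1)
x, y = -1.0, 0.0
for _ in range(n_steps):
    dVx, dVy = grad_V(x, y)
    b = np.clip(np.digitize(x, edges) - 1, 0, len(counts) - 1)
    force_sum[b] += dVx
    counts[b] += 1
    mean_force = force_sum[b] / counts[b]  # running estimate of E[dV/dx | xi = x]
    # biased overdamped Langevin step: the estimated mean force is subtracted
    x += (-dVx + mean_force) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    y += -dVy * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()

# The converged bin averages approximate the derivative of the free energy along xi.
print("estimated mean force per bin:", np.round(force_sum / np.maximum(counts, 1), 2))
```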

    Enhanced sampling of multidimensional free-energy landscapes using adaptive biasing forces

    We propose an adaptive biasing algorithm aimed at enhancing the sampling of multimodal measures by Langevin dynamics. The underlying idea consists in generalizing the standard adaptive biasing force method, commonly used in conjunction with molecular dynamics, to handle multidimensional reaction coordinates more effectively. The proposed approach is anticipated to be particularly useful for reaction coordinates whose components are weakly coupled, as illustrated by a mathematical analysis of the long-time convergence of the algorithm. The strengths as well as the intrinsic limitations of the method are discussed and illustrated on two realistic test cases.
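
    One plausible reading of this generalization, sketched below for illustration only (it is not necessarily the paper's exact scheme): for a reaction coordinate with several weakly coupled components, keep one one-dimensional mean-force histogram per component instead of a full multidimensional histogram, so that the bias is a sum of one-dimensional terms and the number of bins grows linearly rather than geometrically with the number of components. All names and sizes below are hypothetical.

```python
import numpy as np

# Per-component ABF accumulators for a 2D reaction coordinate xi = (xi_1, xi_2):
# the bias applied along each component is the running average of the
# instantaneous force on that component, binned only in that component.

n_bins = 30
edges = np.linspace(-1.5, 1.5, n_bins + 1)
force_sum = np.zeros((2, n_bins))    # one accumulator per component
counts = np.zeros((2, n_bins))

def abf_bias(xi, inst_force):
    """Update the per-component averages and return the biasing force.

    xi         : length-2 array, current value of the reaction coordinate
    inst_force : length-2 array, instantaneous force along each component
    """
    bias = np.zeros(2)
    for c in range(2):
        b = np.clip(np.digitize(xi[c], edges) - 1, 0, n_bins - 1)
        force_sum[c, b] += inst_force[c]
        counts[c, b] += 1
        bias[c] = force_sum[c, b] / counts[c, b]   # subtracted from the dynamics
    return bias
```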

    Free-energy-dissipative schemes for the Oldroyd-B model

    In this article, we analyze the stability of various numerical schemes for differential models of viscoelastic fluids. More precisely, we consider the prototypical Oldroyd-B model, for which a free-energy dissipation holds, and we show under which assumptions such a dissipation is also satisfied at the discrete level. Among the numerical schemes we analyze, we consider some discretizations based on the log-formulation of the Oldroyd-B system proposed by Fattal and Kupferman, which have been reported to be numerically more stable than discretizations of the usual formulation in some benchmark problems. Our analysis provides some leads toward understanding these numerical observations.
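
    As background on the log-formulation mentioned above, the sketch below shows the change of variable itself on a single conformation tensor (a generic illustration, not one of the schemes analyzed in the paper): one evolves psi = log(sigma) instead of sigma, and since sigma = exp(psi) is symmetric positive definite for any symmetric psi, positivity is preserved by construction.

```python
import numpy as np

# Log-conformation change of variable: matrix logarithm and exponential of a
# symmetric positive-definite conformation tensor via eigendecomposition.

def sym_log(sigma):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, v = np.linalg.eigh(sigma)
    return v @ np.diag(np.log(w)) @ v.T

def sym_exp(psi):
    """Matrix exponential of a symmetric tensor (always SPD)."""
    w, v = np.linalg.eigh(psi)
    return v @ np.diag(np.exp(w)) @ v.T

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # an SPD conformation tensor
psi = sym_log(sigma)
print(np.allclose(sym_exp(psi), sigma))  # True: the change of variable is invertible
```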

    Greedy algorithms for high-dimensional eigenvalue problems

    In this article, we present two new greedy algorithms for the computation of the lowest eigenvalue (and an associated eigenvector) of a high-dimensional eigenvalue problem, and prove some convergence results for these algorithms and their orthogonalized versions. The performance of our algorithms is illustrated on numerical test cases (including the computation of the buckling modes of a microstructured plate), and compared with that of another greedy algorithm for eigenvalue problems introduced by Ammar and Chinesta.
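
    To illustrate the flavor of such greedy approaches (this is not the paper's exact algorithm), the sketch below computes a first rank-one term x ⊗ y approximating the lowest eigenvector of A = sum_k B_k ⊗ C_k by alternating minimization of the Rayleigh quotient: for fixed y, the optimal x is the lowest eigenvector of M(y) = sum_k (y^T C_k y) B_k, and symmetrically for y. The matrices and sizes are illustrative assumptions.

```python
import numpy as np

# First greedy rank-one term for the lowest eigenvalue of A = sum_k kron(B_k, C_k),
# obtained by alternating minimization of the Rayleigh quotient of x (tensor) y.

def lap(n):
    # 1D discrete Laplacian (tridiagonal, positive definite)
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n, m = 20, 20
Bs = [lap(n), np.eye(n), np.diag(np.linspace(0.0, 1.0, n))]
Cs = [np.eye(m), lap(m), 0.3 * np.diag(np.linspace(0.0, 1.0, m))]

x = np.ones(n) / np.sqrt(n)
y = np.ones(m) / np.sqrt(m)
for _ in range(50):                          # alternating (fixed-point) iterations
    My = sum((y @ C @ y) * B for B, C in zip(Bs, Cs))
    x = np.linalg.eigh(My)[1][:, 0]          # lowest eigenvector of M(y)
    Nx = sum((x @ B @ x) * C for B, C in zip(Bs, Cs))
    y = np.linalg.eigh(Nx)[1][:, 0]          # lowest eigenvector of N(x)

rayleigh = sum((x @ B @ x) * (y @ C @ y) for B, C in zip(Bs, Cs))
A_full = sum(np.kron(B, C) for B, C in zip(Bs, Cs))
print("rank-one Rayleigh quotient :", round(rayleigh, 6))
print("exact lowest eigenvalue    :", round(np.linalg.eigvalsh(A_full)[0], 6))
```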

    Optimal scaling for the transient phase of the random walk Metropolis algorithm: The mean-field limit

    We consider the random walk Metropolis algorithm on R^n with Gaussian proposals, when the target probability measure is the n-fold product of a one-dimensional law. In the limit n → ∞, it is well known (see [Ann. Appl. Probab. 7 (1997) 110-120]) that, when the variance of the proposal scales as the inverse of the dimension n while time is accelerated by the factor n, a diffusive limit is obtained for each component of the Markov chain if this chain starts at equilibrium. This paper extends this result to the case where the initial distribution is not the target probability measure. Remarking that the interaction between the components of the chain, due to the common acceptance/rejection of the proposed moves, is of mean-field type, we obtain a propagation-of-chaos result under the same scaling as in the stationary case. This proves that, in terms of the dimension n, the same scaling holds for the transient phase of the Metropolis-Hastings algorithm as near stationarity. The diffusive and mean-field limit of each component is a diffusion process that is nonlinear in the sense of McKean. This opens the route to new investigations of the optimal choice of the variance of the proposal distribution in order to accelerate convergence to equilibrium (see [Optimal scaling for the transient phase of Metropolis-Hastings algorithms: the longtime behavior, Bernoulli (2014), to appear]). Published in the Annals of Applied Probability, http://dx.doi.org/10.1214/14-AAP1048.
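
    A generic illustration of the scaling in question (not reproducing the paper's analysis): random walk Metropolis on R^n with a standard Gaussian product target and proposal variance l^2/n. With this scaling the average acceptance rate stabilizes as n grows (around 0.234 at the optimal l when starting at equilibrium), which is the regime whose transient phase is studied here. The target, l and run lengths are arbitrary choices.

```python
import numpy as np

# Random walk Metropolis with proposal variance l^2 / n on an n-dimensional
# standard Gaussian product target; the empirical acceptance rate is roughly
# dimension-independent under this scaling.

def rwm_acceptance(n, ell=2.38, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)           # start at (approximate) equilibrium
    log_pi = lambda z: -0.5 * z @ z      # log-density of the product target
    accepted = 0
    for _ in range(n_steps):
        y = x + (ell / np.sqrt(n)) * rng.standard_normal(n)   # variance l^2 / n
        if np.log(rng.uniform()) < log_pi(y) - log_pi(x):     # Metropolis rule
            x, accepted = y, accepted + 1
    return accepted / n_steps

for n in (10, 100, 1000):
    print(n, round(rwm_acceptance(n), 3))
```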

    The parallel replica method for simulating long trajectories of Markov chains

    The parallel replica dynamics, originally developed by A.F. Voter, efficiently simulates very long trajectories of metastable Langevin dynamics. We present an analogous algorithm for discrete-time Markov processes. Such Markov processes naturally arise, for example, from the time discretization of a continuous-time stochastic dynamics. Appealing to properties of quasi-stationary distributions, we show that our algorithm reproduces exactly (in some limiting regime) the law of the original trajectory, coarsened over the metastable states.
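
    The sketch below illustrates the bookkeeping at the heart of such a parallel step in a deliberately simplified setting (it is not the paper's full algorithm: dephasing to the quasi-stationary distribution is assumed already done, so each replica's exit time is geometric). N replicas are polled in a fixed order; when the first exit is observed at parallel step m by replica k, the physical clock advances by N*(m-1)+k steps, and this reconstructed time has the same geometric law as a single replica's exit time.

```python
import numpy as np

# Parallel-step bookkeeping: each of the n_replicas exits the metastable state
# with probability p_exit at every step (geometric exit times, as if started
# from the quasi-stationary distribution).  The first exit, found by polling
# replicas in order, gives the reconstructed physical exit time.

def parallel_replica_exit(p_exit, n_replicas, rng):
    m = 0
    while True:
        m += 1
        for k in range(1, n_replicas + 1):         # poll replicas in order
            if rng.uniform() < p_exit:             # replica k exits at step m
                return n_replicas * (m - 1) + k    # reconstructed exit time

rng = np.random.default_rng(2)
p, N, n_samples = 0.002, 8, 10_000
times = [parallel_replica_exit(p, N, rng) for _ in range(n_samples)]
print("mean reconstructed exit time:", round(np.mean(times), 1))  # close to 1/p = 500
```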

    A nonintrusive Reduced Basis Method applied to aeroacoustic simulations

    The Reduced Basis Method can be exploited efficiently only if the so-called affine dependence assumption is satisfied, namely that the operator and right-hand side of the considered problem depend affinely on the parameters. When it is not, the Empirical Interpolation Method (EIM) is usually employed to recover this assumption approximately. In both cases, the Reduced Basis Method requires one to access and modify the assembly routines of the corresponding computational code, leading to an intrusive procedure. In this work, we derive variants of the EIM algorithm and explain how they can be used to turn the Reduced Basis Method into a nonintrusive procedure. We present examples of aeroacoustic problems solved by integral equations and show how our algorithms can benefit from the linear algebra tools available in the considered code.
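
    As background on the interpolation procedure being adapted, here is a minimal sketch of the standard EIM greedy construction on a toy one-dimensional parametrized function (the nonintrusive variants and the aeroacoustic integral-equation setting of the paper are not reproduced): EIM selects interpolation ("magic") points and basis functions so that g(x; mu) is approximated by sum_j beta_j(mu) q_j(x), restoring an approximate affine decomposition in the parameter. The function, grid and sizes are illustrative assumptions.

```python
import numpy as np

# Standard EIM greedy: at each step, pick the (parameter, point) pair where the
# current interpolation residual is largest, and add the normalized residual to
# the basis.  The interpolation matrix at the magic points stays invertible
# (unit lower triangular) by construction.

x = np.linspace(-1.0, 1.0, 200)                   # spatial grid
mus = np.linspace(0.2, 2.0, 60)                   # training parameter values
G = np.array([1.0 / (1.0 + (mu * x) ** 2) for mu in mus])   # snapshots, non-affine in mu

n_terms = 6
basis, pts = [], []                               # basis functions q_j and magic points
for _ in range(n_terms):
    if basis:
        Q = np.array(basis).T                     # shape (n_x, M)
        coeffs = np.linalg.solve(Q[pts, :], G[:, pts].T)   # beta_j(mu) for all training mu
        residual = G - (Q @ coeffs).T
    else:
        residual = G.copy()
    i_mu, i_x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
    r = residual[i_mu]
    basis.append(r / r[i_x])                      # normalize so that q_M(x_M) = 1
    pts.append(i_x)

Q = np.array(basis).T
coeffs = np.linalg.solve(Q[pts, :], G[:, pts].T)
print("magic points:", np.round(x[pts], 3))
print("max interpolation error on the training set:", np.abs(G - (Q @ coeffs).T).max())
```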